68 research outputs found

    Some New Results Concerning the Primal-Dual Path-Following Interior Point Algorithm for Linear Programming

    The Primal-Dual (PD) path-following interior point algorithm for solving Linear Programming (LP) problems is considered. Firstly, we investigate its convergence and complexity properties when a new long-step linesearch procedure suggested by M. J. D. Powell is employed. Assuming that a primal-dual strictly feasible starting point is available and that the centring parameters are bounded away from zero, we show that the duality gap of the iterates tends to zero, thereby proving that the limit points of the sequence of iterates are solutions of the LP problem. Further, we consider whether the limit points of the sequence of iterates generated by some long-step variants of the PD algorithm coincide with the analytic centre of the primal-dual solution set of the problem. Because of the difficulty of the analysis involved, we restrict attention to the case when the standard form of the LP problem has one equality constraint and multiple solutions. We find that, when the centring parameters are bounded away from zero, the sequence of iterates does converge to the analytic centre. When the centring parameters tend to zero asymptotically at the same rate as the duality gap of the iterates, however, we show that in exact arithmetic the sequence of iterates may have other limit points in the solution set.

    The author was supported through grant GR/S34472 from the Engineering and Physical Sciences Research Council of the UK.
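As a concrete illustration of the algorithm studied above, the following sketch performs one primal-dual Newton step on a toy LP. The data, the centring value, and the simple fraction-to-boundary stepsize rule are our own illustrative assumptions (the Powell-type linesearch of the paper is more sophisticated):

```python
import numpy as np

# Hypothetical toy LP (illustrative data): min c'x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 1.0]])        # m = 1 equality constraint, n = 2 variables
b = np.array([1.0])
c = np.array([1.0, 2.0])

# A primal-dual strictly feasible starting point, as the abstract assumes.
x = np.array([0.5, 0.5])          # Ax = b, x > 0
y = np.array([0.5])
s = c - A.T @ y                   # dual slacks s = (0.5, 1.5) > 0

n = len(x)
mu = x @ s / n                    # duality-gap measure
sigma = 0.1                       # centring parameter, bounded away from zero

# Newton step on the perturbed KKT conditions:
#   A dx = 0,   A' dy + ds = 0,   S dx + X ds = sigma*mu*e - XSe
K = np.block([
    [A,                np.zeros((1, 1)), np.zeros((1, n))],
    [np.zeros((n, n)), A.T,              np.eye(n)],
    [np.diag(s),       np.zeros((n, 1)), np.diag(x)],
])
rhs = np.concatenate([np.zeros(1), np.zeros(n), sigma * mu - x * s])
d = np.linalg.solve(K, rhs)
dx, dy, ds = d[:n], d[n:n + 1], d[n + 1:]

def max_step(v, dv):
    """Longest step in (0, 1] keeping v + alpha*dv positive (0.995 to boundary)."""
    neg = dv < 0
    if not neg.any():
        return 1.0
    return min(1.0, 0.995 * np.min(-v[neg] / dv[neg]))

alpha = min(max_step(x, dx), max_step(s, ds))
x, s = x + alpha * dx, s + alpha * ds

# For feasible iterates dx'ds = 0, so the gap contracts by 1 - alpha*(1 - sigma).
print(x @ s, n * mu)
```

The final print makes the contraction visible: the new duality gap equals the old one multiplied by 1 - alpha*(1 - sigma).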

    A new and improved quantitative recovery analysis for iterative hard thresholding algorithms in compressed sensing

    We present a new recovery analysis for a standard compressed sensing algorithm, Iterative Hard Thresholding (IHT) (Blumensath and Davies, 2008), which considers the fixed points of the algorithm. In the context of arbitrary measurement matrices, we derive a sufficient condition for convergence of IHT to a fixed point and a necessary condition for the existence of fixed points. These conditions allow us to perform a sparse signal recovery analysis: in the deterministic noiseless case, they imply that the original sparse signal is the unique fixed point and limit point of IHT; in the case of Gaussian measurement matrices and noise, they yield a bound on the approximation error of the IHT limit as a multiple of the noise level. By generalizing the notion of fixed points, we extend our analysis to the variable-stepsize Normalised IHT (N-IHT) (Blumensath and Davies, 2010). For both stepsize schemes, we obtain asymptotic phase transitions in a proportional-dimensional framework, quantifying the sparsity/undersampling trade-off for which recovery is guaranteed. Exploiting the reasonable average-case assumption that the underlying signal and measurement matrix are independent, comparison with previous results within this framework shows a substantial quantitative improvement.
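A minimal sketch of the IHT iteration analysed above, under the simplifying assumptions of a unit stepsize and a measurement matrix rescaled to have spectral norm below one (a classical sufficient condition for convergence of unit-step IHT); the data are illustrative, not from the paper:

```python
import numpy as np

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries of z, zero out the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-k:]
    out[idx] = z[idx]
    return out

def iht(y, A, k, iters=50):
    """Plain IHT with unit stepsize:  x <- H_k(x + A'(y - Ax))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + A.T @ (y - A @ x), k)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))
A *= 0.9 / np.linalg.norm(A, 2)   # rescale so ||A||_2 < 1
x_true = np.zeros(20)
x_true[[3, 7]] = [1.0, -2.0]      # a 2-sparse signal (noiseless case)
y = A @ x_true
x_hat = iht(y, A, 2)
print(np.linalg.norm(y - A @ x_hat))
```

Under the spectral-norm condition the residual is non-increasing across iterations, which is the kind of fixed-point behaviour the analysis above exploits.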

    A new perspective on the complexity of interior point methods for linear programming

    In a dynamical systems paradigm, many optimization algorithms are equivalent to applying the forward Euler method to the system of ordinary differential equations defined by the vector field of the search directions. Thus the stiffness of such vector fields plays an essential role in the complexity of these methods. We first exemplify this point with a theoretical result for general linesearch methods for unconstrained optimization, which we then employ to investigate the complexity of a primal short-step path-following interior point method for linear programming. Our analysis involves showing that the Newton vector field associated with the primal logarithmic barrier is nonstiff in a sufficiently small and shrinking neighbourhood of its minimizer. Thus, by confining the iterates to these neighbourhoods of the primal central path, our algorithm has a nonstiff vector field of search directions, and we can give a worst-case bound on its iteration complexity. Furthermore, due to the generality of our vector field setting, we can perform a similar (global) iteration complexity analysis when the Newton direction of the interior point method is computed only approximately, using some direct method for solving linear systems of equations.
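The dynamical-systems viewpoint can be made concrete on a quadratic: gradient descent is exactly the forward Euler method applied to the gradient flow, and the stable stepsize is dictated by the stiffness (eigenvalue spread) of the linear vector field. The matrix and stepsizes below are illustrative assumptions:

```python
import numpy as np

# Gradient descent on f(x) = 0.5 x'Qx is forward Euler applied to x' = -Qx.
# Forward Euler is stable only for h < 2 / lambda_max(Q), so a stiff vector
# field (large eigenvalue ratio) forces small steps and high iteration counts.
Q = np.diag([1.0, 100.0])         # illustrative ill-conditioned quadratic
x0 = np.array([1.0, 1.0])

def euler(h, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x = x - h * Q @ x         # one explicit Euler step on the gradient flow
    return x

stable = euler(0.019)             # h < 2/100: both modes contract
unstable = euler(0.021)           # h > 2/100: the stiff mode blows up
print(np.linalg.norm(stable), np.linalg.norm(unstable))
```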

    Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

    We present global convergence rates for a line-search method which is based on random first-order models and directions whose quality is ensured only with certain probability. We show that in terms of the order of the accuracy, the evaluation complexity of such a method is the same as its counterparts that use deterministic accurate models; the use of probabilistic models only increases the complexity by a constant, which depends on the probability of the models being good. We particularize and improve these results in the convex and strongly convex case. We also analyze a probabilistic cubic regularization variant that allows approximate probabilistic second-order models and show improved complexity bounds compared to probabilistic first-order methods; again, as a function of the accuracy, the probabilistic cubic regularization bounds are of the same (optimal) order as for the deterministic case.
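A loose sketch (our own simplification, not the paper's exact method) of a line-search iteration driven by a randomly perturbed gradient, which is therefore a good direction only with some probability; steps are accepted only under a sufficient-decrease test, and the stepsize is shrunk on unsuccessful iterations:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: 0.5 * x @ x          # strongly convex test function
grad = lambda x: x

x = np.array([10.0, -10.0])
t, c1 = 1.0, 1e-4                  # initial stepsize, sufficient-decrease constant
for _ in range(200):
    g = grad(x) + rng.standard_normal(2)   # random model gradient: accurate only probabilistically
    d = -g
    # Accept only if the step achieves sufficient decrease relative to the model gradient.
    if f(x + t * d) <= f(x) - c1 * t * (g @ g):
        x = x + t * d
        t = min(2 * t, 1.0)        # successful step: (possibly) expand the stepsize
    else:
        t *= 0.5                   # unsuccessful step: shrink the stepsize
print(f(x))
```

Because accepted steps must strictly decrease f, the iteration is monotone even when the random direction happens to be bad; the cost of bad directions shows up only as extra (unsuccessful) iterations, mirroring the constant-factor complexity inflation described above.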

    Some Disadvantages of a Mehrotra-Type Primal-Dual Corrector Interior Point Algorithm for Linear Programming

    The Primal-Dual Corrector (PDC) algorithm that we propose computes on each iteration a corrector direction in addition to the direction of the standard primal-dual path-following interior point method (Kojima et al., 1989) for Linear Programming (LP), in an attempt to improve performance. The new iterate is chosen by moving along the sum of these directions, from the current iterate. This technique is similar to the construction of Mehrotra's highly popular predictor-corrector algorithm (Mehrotra, 1991). We present examples, however, that show that the PDC algorithm may fail to converge to a solution of the LP problem, in both exact and finite arithmetic, regardless of the choice of stepsize that is employed. The cause of this bad behaviour is that the correctors exert too much influence on the direction in which the iterates move.

    The author was supported through grant GR/S34472 from the Engineering and Physical Sciences Research Council of the UK.
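The construction can be sketched as follows; we stand in a Mehrotra-style second-order corrector for the corrector direction, and the problem data and stepsize are illustrative assumptions, not the paper's examples:

```python
import numpy as np

# Illustrative strictly feasible iterate for min c'x s.t. Ax = b, x >= 0.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5]); y = np.array([0.5]); s = c - A.T @ y
n = len(x); mu = x @ s / n; sigma = 0.1

def kkt_solve(r):
    """Solve the primal-dual Newton system with right-hand side (0, 0, r)."""
    K = np.block([
        [A,                np.zeros((1, 1)), np.zeros((1, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, 1)), np.diag(x)],
    ])
    d = np.linalg.solve(K, np.concatenate([np.zeros(1), np.zeros(n), r]))
    return d[:n], d[n:n + 1], d[n + 1:]

# Standard path-following direction.
dx, dy, ds = kkt_solve(sigma * mu - x * s)
# Corrector: compensates the second-order term dX dS e, as in Mehrotra-type methods.
cx, cy, cs = kkt_solve(-dx * ds)

# PDC moves along the *sum* of the two directions from the current iterate.
px, ps = dx + cx, ds + cs
alpha = 0.5                        # some stepsize keeping (x, s) positive
x_new, s_new = x + alpha * px, s + alpha * ps
print(x_new, s_new)
```

The abstract's point is that this sum can be dominated by the corrector, which is what breaks convergence on the paper's examples.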

    On the Convergence of a Primal-Dual Second-Order Corrector Interior Point Algorithm for Linear Programming

    The Primal-Dual Second-Order Corrector (PDSOC) algorithm that we investigate computes on each iteration a corrector direction in addition to the direction of the standard primal-dual path-following interior point method (Kojima et al., 1989) for Linear Programming (LP), in an attempt to improve performance. The corrector is multiplied by the square of the stepsize in the expression of the new iterate. While the outline of the PDSOC algorithm is known (Zhang et al., 1995), we present a substantive theoretical interpretation of its construction. Further, we investigate its convergence and complexity properties, provided that a primal-dual strictly feasible starting point is available. Firstly, we use a new long-step linesearch technique suggested by M. J. D. Powell, and show that, when the centring parameters are bounded away from zero, the limit points of the sequence of iterates are primal-dual strictly complementary solutions of the LP problem. We consider also the popular choice of letting the centring parameters be of the same order as the duality gap of the iterates, asymptotically. A standard long-step linesearch is employed to prove that the sequence of iterates converges to a primal-dual strictly complementary solution of the LP problem, which may not be the analytic centre of the primal-dual solution set, as further shown by an example.

    The author was supported through grant GR/S34472 from the Engineering and Physical Sciences Research Council of the UK.
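A sketch of the PDSOC update, in which the corrector is weighted by the square of the stepsize; the corrector shown is a Mehrotra-style second-order term and the data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative strictly feasible iterate for a toy LP (one equality constraint).
A = np.array([[1.0, 1.0]]); c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5]); y = np.array([0.5]); s = c - A.T @ y
n = len(x); mu = x @ s / n; sigma = 0.1

def kkt_solve(r):
    """Primal-dual Newton solve with right-hand side (0, 0, r); returns (dx, ds)."""
    K = np.block([
        [A,                np.zeros((1, 1)), np.zeros((1, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, 1)), np.diag(x)],
    ])
    d = np.linalg.solve(K, np.concatenate([np.zeros(1), np.zeros(n), r]))
    return d[:n], d[n + 1:]

dx, ds = kkt_solve(sigma * mu - x * s)   # path-following direction
cx, cs = kkt_solve(-dx * ds)             # second-order corrector

# PDSOC weights the corrector by the *square* of the stepsize:
alpha = 0.25
x_new = x + alpha * dx + alpha**2 * cx
s_new = s + alpha * ds + alpha**2 * cs
# Up to third order in alpha, the complementarity products then satisfy
#   x_new * s_new  ≈  (1 - alpha) * x * s + alpha * sigma * mu
print(x_new * s_new)
```

The alpha-squared weighting is what makes the corrector act as a genuine second-order term along the step, rather than an additive direction as in a plain sum update.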

    Finding a point in the relative interior of a polyhedron

    A new initialization or `Phase I' strategy for feasible interior point methods for linear programming is proposed that computes a point on the primal-dual central path associated with the linear program. Provided there exist primal-dual strictly feasible points - an all-pervasive assumption in interior point method theory that implies the existence of the central path - our initial method (Algorithm 1) is globally Q-linearly and asymptotically Q-quadratically convergent, with a provable worst-case iteration complexity bound. When this assumption is not met, the numerical behaviour of Algorithm 1 is highly disappointing, even when the problem is primal-dual feasible. This is due to the presence of implicit equalities, inequality constraints that hold as equalities at all the feasible points. Controlled perturbations of the inequality constraints of the primal-dual problems are introduced - geometrically equivalent to enlarging the primal-dual feasible region and then systematically contracting it back to its initial shape - in order for the perturbed problems to satisfy the assumption. Thus Algorithm 1 can successfully be employed to solve each of the perturbed problems.

    We show that, when there exist primal-dual strictly feasible points of the original problems, the resulting method, Algorithm 2, finds such a point in a finite number of changes to the perturbation parameters. When implicit equalities are present, but the original problem and its dual are feasible, Algorithm 2 asymptotically detects all the primal-dual implicit equalities and generates a point in the relative interior of the primal-dual feasible set. Algorithm 2 can also asymptotically detect primal-dual infeasibility. Successful numerical experience with Algorithm 2 on linear programs from NETLIB and CUTEr, both with and without significant preprocessing of the problems, indicates that Algorithm 2 may be used as an algorithmic preprocessor for removing implicit equalities, with theoretical guarantees of convergence.
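The controlled-perturbation idea can be illustrated on a hand-made polyhedron with implicit equalities (our own example, not from the NETLIB/CUTEr experiments above):

```python
import numpy as np

# The polyhedron {x >= 0 : x1 + x2 = 0, x3 + x4 = 1} has the implicit
# equalities x1 = x2 = 0, so no strictly feasible point exists and the central
# path is undefined. Relaxing the bounds to x >= -lam*e (equivalently
# z = x + lam*e >= 0 with Az = b + lam*A@e) enlarges the region so that
# strictly feasible points appear; contracting lam -> 0 shrinks it back and
# yields a point in the relative interior, exposing the implicit equalities.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([0.0, 1.0])
e = np.ones(4)

def perturbed_point(lam):
    """A strictly feasible z for the lam-perturbed problem, mapped back to x."""
    z = np.array([lam, lam, 0.5 + lam, 0.5 + lam])   # z > 0 and Az = b + lam*A@e
    return z - lam * e                               # original variables x = z - lam*e

x = perturbed_point(1e-9)
print(x)   # -> (0, 0, 0.5, 0.5): x1 = x2 = 0 are revealed as implicit equalities
```

As lam shrinks, the coordinates that collapse to their bounds are exactly the implicit equalities, while the others settle in the relative interior - the behaviour the abstract attributes to Algorithm 2.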